105 research outputs found

    Performance analysis of a generalized upset detection procedure

    A general procedure for upset detection in complex systems, called the data block capture and analysis upset monitoring process, is described and analyzed. The process consists of repeatedly recording a fixed amount of data from a set of predetermined observation lines of the system being monitored (i.e., capturing a block of data) and then analyzing the captured block in an attempt to determine whether the system is functioning correctly. The algorithm that analyzes the data blocks can be characterized by the amount of time it requires to examine a data block of a given length and ascertain the existence of features or conditions that have been predetermined to characterize the upset-free behavior of the system. The performance of linear, quadratic, and logarithmic data analysis algorithms is rigorously characterized in terms of three performance measures: (1) the probability of correctly detecting an upset; (2) the expected number of false alarms; and (3) the expected latency in detecting upsets.
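    The capture-and-analysis loop can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Line` class, the all-zeros upset-free pattern, and the block sizes are hypothetical stand-ins, and the check shown is the linear-time case.

```python
class Line:
    """Hypothetical monitored observation line; optionally injects a
    transient upset (a 1 instead of the expected 0) every k-th read."""
    def __init__(self, upset_every=0):
        self.t = 0
        self.upset_every = upset_every

    def read(self):
        self.t += 1
        if self.upset_every and self.t % self.upset_every == 0:
            return 1  # transient upset
        return 0      # upset-free sample

def capture_block(read_line, block_size):
    """Capture a fixed-size block of samples from the monitored line."""
    return [read_line() for _ in range(block_size)]

def linear_analysis(block, expected):
    """Linear-time check: every sample must match the predetermined
    upset-free pattern."""
    return all(sample == expected for sample in block)

def monitor(read_line, expected, block_size, num_blocks):
    """Repeatedly capture and analyze blocks; count flagged blocks."""
    upsets = 0
    for _ in range(num_blocks):
        block = capture_block(read_line, block_size)
        if not linear_analysis(block, expected):
            upsets += 1
    return upsets

healthy = monitor(Line().read, expected=0, block_size=8, num_blocks=10)
faulty = monitor(Line(upset_every=5).read, expected=0, block_size=8, num_blocks=10)
```

    With these toy parameters every 8-sample block from the faulty line contains at least one upset sample, so every block is flagged, while the healthy line raises no alarms.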

    Efficient diagnosis of multiprocessor systems under probabilistic models

    The problem of fault diagnosis in multiprocessor systems is considered under a probabilistic fault model. The focus is on minimizing the number of tests that must be conducted in order to correctly diagnose the state of every processor in the system with high probability. A diagnosis algorithm is presented that can correctly diagnose the state of every processor with probability approaching one, for a class of systems, using slightly more than a linear number of tests. A nearly matching lower bound on the number of tests required to achieve correct diagnosis in arbitrary systems is also proven. Lower and upper bounds on the number of tests required for regular systems are also presented. A class of regular systems, which includes hypercubes, is shown to be correctly diagnosable with high probability. In all cases, the number of tests required under this probabilistic model is shown to be significantly less than under a bounded-size fault set model. Because the number of tests that must be conducted is a measure of the diagnosis overhead, these results represent a dramatic improvement in the performance of system-level diagnosis techniques.
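    As a rough illustration of the setting (not the paper's algorithm), the following toy simulation diagnoses units under a symmetric probabilistic fault model: each unit is faulty independently with some probability, fault-free testers report the tested unit's true state, faulty testers report a coin flip, and each unit's label is decided by majority vote over a few random peer tests. All parameters here are made up for illustration.

```python
import random

def simulate_diagnosis(n=64, p_fault=0.1, tests_per_unit=9, seed=0):
    """Toy probabilistic system-level diagnosis: label each unit by
    majority vote over tests performed by random peers."""
    rng = random.Random(seed)
    faulty = [rng.random() < p_fault for _ in range(n)]
    verdicts = []
    for u in range(n):
        votes = 0
        for _ in range(tests_per_unit):
            tester = rng.randrange(n - 1)
            if tester >= u:
                tester += 1  # a unit never tests itself
            if faulty[tester]:
                votes += rng.random() < 0.5  # faulty tester: coin flip
            else:
                votes += faulty[u]           # fault-free tester: truth
        verdicts.append(votes > tests_per_unit / 2)
    correct = sum(v == f for v, f in zip(verdicts, faulty))
    return correct / n

accuracy = simulate_diagnosis()
```

    Even with only a handful of tests per unit, majority voting drives the per-unit misdiagnosis probability very low, which is the intuition behind needing only slightly more than a linear number of tests overall.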

    Fault injector for middleware applications

    Issued as final report. Raytheon Company.

    Minimum Information Disclosure with Efficiently Verifiable Credentials

    Public-key based certificates provide a standard way to prove one's identity, as certified by some certificate authority (CA). However, standard certificates provide a binary identification: either the whole identity of the subject is known, or nothing is known. We propose using a Merkle hash tree structure, whereby it is possible for a single certificate to certify many separate claims or attributes, each of which may be proved independently, without revealing the others. Additionally, we demonstrate how trees from multiple sources can be combined by modifying the tree structure slightly. This allows claims by different authorities, such as an employer or professional organization, to be combined under a single certificate, without the CA needing to know (let alone verify) all of the claims. In addition to describing the hash tree structure and the protocols for constructing and verifying our proposed credential, we formally prove that it provides unforgeability and privacy, and we present initial performance results demonstrating its efficiency.
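    The selective-disclosure idea can be sketched with a small Merkle tree: each leaf is the hash of one claim, the CA signs only the root, and a single claim is proved by revealing it plus the sibling hashes on its path to the root. The claim strings and four-leaf tree below are illustrative, not the paper's construction; a real credential would also carry the CA's signature over the root.

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(claims):
    """Merkle tree over hashed claims; returns the list of levels,
    leaves first, root last. Assumes a power-of-two number of claims."""
    level = [h(c.encode()) for c in claims]
    levels = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        levels.append(level)
    return levels

def prove(levels, index):
    """Authentication path for the leaf at `index`: one sibling hash per
    level. Only hashes are revealed, never the other claims."""
    path = []
    for level in levels[:-1]:
        sibling = index ^ 1
        path.append((sibling < index, level[sibling]))
        index //= 2
    return path

def verify(root, claim, path):
    """Recompute the root from the disclosed claim and its path."""
    node = h(claim.encode())
    for sibling_is_left, sibling in path:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == root

claims = ["name=Alice", "dob=1990-01-01", "employer=Acme", "license=MD"]
levels = build_tree(claims)
root = levels[-1][0]       # the CA would sign this root
proof = prove(levels, 2)   # disclose only "employer=Acme"
ok = verify(root, "employer=Acme", proof)
```

    Verification succeeds only for the exact claim the proof was built for; any substituted claim hashes to a different leaf and fails to reproduce the signed root.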

    Distributed MIMO Interference Cancellation for Interfering Wireless Networks: Protocol and Initial Simulation

    In this report, the problem of interference in dense wireless network deployments is addressed. Two example scenarios are: 1) overlapping basic service sets (OBSSes) in wireless LAN deployments, and 2) interference among close-by femtocells. The proposed approach is to exploit the interference cancellation and spatial multiplexing capabilities of multiple-input multiple-output (MIMO) links to mitigate interference and improve the performance of such networks. Both semi-distributed and fully distributed protocols for 802.11-based wireless networks are presented and evaluated. The philosophy of the approach is to minimize modifications to existing protocols, particularly within client-side devices; thus, modifications are primarily made at the access points (APs). The semi-distributed protocol was fully implemented within the 802.11 package of ns-3 to evaluate the approach. Simulation results with two APs, and with either one or two clients per AP, show that within 5 seconds of network operation, our protocol increases the goodput on the downlink by about 50% compared against a standard 802.11n implementation.
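    The spatial interference-cancellation idea that MIMO enables can be illustrated with a toy two-antenna zero-forcing receiver. This is a generic textbook illustration, not the report's protocol: the channel gains and symbols below are made up, and noise is omitted for clarity.

```python
# Hypothetical 2-antenna receiver: antenna i hears h[i]*x + g[i]*z,
# where x is the desired symbol and z is a neighboring cell's interference.
h = [0.8 + 0.3j, -0.2 + 0.9j]   # desired link's channel gains
g = [0.5 - 0.4j, 0.7 + 0.1j]    # interfering link's channel gains
x, z = 1 + 1j, -1 + 1j          # transmitted symbols

y = [h[i] * x + g[i] * z for i in range(2)]  # noiseless received vector

# Zero-forcing: invert the stacked 2x2 channel [h | g] (Cramer's rule)
# to separate the desired stream from the interference.
det = h[0] * g[1] - g[0] * h[1]
x_hat = (y[0] * g[1] - g[0] * y[1]) / det
z_hat = (h[0] * y[1] - y[0] * h[1]) / det
```

    With two receive antennas and two streams, the channel matrix is square and (generically) invertible, so both the desired symbol and the interference are recovered exactly in the noiseless case; the interference estimate can then simply be discarded.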

    A Patient-centric, Attribute-based, Source-verifiable Framework for Health Record Sharing

    The storage of health records in electronic format, and the widespread sharing of these records among different health care providers, have enormous potential benefits to the U.S. healthcare system. These benefits include both improving the quality of health care delivered to patients and reducing the costs of delivering that care. However, maintaining the security of electronic health record systems and the privacy of the information they contain is paramount to ensure that patients have confidence in the use of such systems. In this paper, we propose a framework for electronic health record sharing that is patient-centric, i.e., it provides patients with substantial control over how their information is shared and with whom; provides for verifiability of the original sources of health information and the integrity of the data; and permits fine-grained decisions about when data can be shared, based on the use of attribute-based techniques for authorization and access control. We present the architecture of the framework, describe a prototype system we have built based on it, and demonstrate its use within a scenario involving emergency responders' access to health record information.
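    The attribute-based authorization idea can be sketched minimally as a policy check: each record section lists the attributes a requester must hold. The policy entries and attribute names below are hypothetical, and in a real system the attributes would be certified credentials verified cryptographically, not self-asserted strings.

```python
# Hypothetical patient-set policy: record section -> required attributes.
policy = {
    "allergies":   {"role:emergency_responder"},
    "medications": {"role:physician", "org:licensed_provider"},
}

def may_access(section, requester_attrs):
    """Grant access only if the requester holds every attribute the
    patient's policy requires for that section."""
    required = policy.get(section)
    return required is not None and required <= set(requester_attrs)

emt = ["role:emergency_responder"]
doc = ["role:physician", "org:licensed_provider"]
```

    Because the decision is per section and per attribute set, the patient can grant an emergency responder access to allergies while keeping medication history restricted to licensed physicians.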

    Translational treatment paradigm for managing non-unions secondary to radiation injury utilizing adipose-derived stem cells and angiogenic therapy

    Background: Bony non-unions arising in the aftermath of collateral radiation injury are commonly managed with vascularized free tissue transfers. Unfortunately, these procedures are invasive and fraught with attendant morbidities. This study investigated a novel, alternative treatment paradigm utilizing adipose-derived stem cells (ASCs) combined with angiogenic deferoxamine (DFO) in the rat mandible. Methods: Rats were exposed to a bioequivalent dose of radiation and mandibular osteotomy. Those exhibiting non-unions were subsequently treated with surgical debridement alone or debridement plus combination therapy. Radiographic and biomechanical outcomes were assessed after healing. Results: Significant increases in biomechanical strength and radiographic metrics were observed in response to combination therapy (p < .05). Importantly, combined therapy enabled a 65% reduction in persisting non-unions when compared to debridement alone. Conclusion: We support the continued investigation of this promising combination therapy in its potential translation for the management of radiation-induced bony pathology. © 2015 Wiley Periodicals, Inc. Head Neck 38: E837–E843, 2016. Peer reviewed. https://deepblue.lib.umich.edu/bitstream/2027.42/137613/1/hed24110.pd

    CT-T: MedVault-ensuring security and privacy for electronic medical records

    Issued as final report. National Science Foundation (U.S.).